PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks

Authors

Abstract

Large capacity deep learning models are often prone to a high generalization gap when trained with a limited amount of labeled training data. A recent class of methods addressing this problem uses various ways of constructing a new training sample by mixing a pair (or more) of samples. We propose PatchUp, a hidden-state block-level regularization technique for Convolutional Neural Networks (CNNs) that is applied to selected contiguous blocks of feature maps from a random pair of samples. Our approach improves the robustness of CNNs against the manifold intrusion problem that may occur in other state-of-the-art mixing approaches. Moreover, since we mix contiguous blocks of features in the hidden space, which has more dimensions than the input space, we obtain more diverse samples oriented towards different dimensions. Our experiments on CIFAR10/100, SVHN, Tiny-ImageNet, and ImageNet using ResNet architectures including PreActResnet18/34, WRN-28-10, and ResNet101/152 show that PatchUp improves upon, or equals, the performance of current state-of-the-art regularizers for CNNs. We also show that PatchUp provides better generalization to deformed samples and is more robust against adversarial attacks.
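A minimal PyTorch-style sketch of the block-level mixing idea, assuming features are intercepted at a random hidden layer: a contiguous square block of each feature map is swapped with the corresponding block from a randomly paired sample, and the targets are mixed in proportion to the altered area. The function name, the single shared block, and the simple target-mixing rule are illustrative simplifications, not the paper's exact procedure.

```python
import torch

def patchup_hard_sketch(h, y, block_size=7):
    """Hard block-level mixing sketch: swap one contiguous block of
    feature-map activations between randomly paired samples.

    h: hidden feature maps, shape (B, C, H, W)
    y: one-hot targets, shape (B, num_classes)
    """
    B, C, H, W = h.shape
    perm = torch.randperm(B, device=h.device)          # random pairing

    # one square block, shared across the batch for simplicity
    top = torch.randint(0, H - block_size + 1, (1,)).item()
    left = torch.randint(0, W - block_size + 1, (1,)).item()
    mask = torch.zeros(1, 1, H, W, device=h.device)
    mask[..., top:top + block_size, left:left + block_size] = 1.0

    h_mixed = h * (1 - mask) + h[perm] * mask          # swap the block

    portion = mask.mean()                              # fraction of altered area
    y_mixed = (1 - portion) * y + portion * y[perm]
    return h_mixed, y_mixed
```

A soft variant would interpolate inside the block instead of swapping it outright, e.g. `h * (1 - mask) + (lam * h + (1 - lam) * h[perm]) * mask` for a mixing coefficient `lam`.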


Similar articles

Convolutional neural networks with low-rank regularization

Large CNNs have delivered impressive performance in various computer vision applications, but their storage and computation requirements make it problematic to deploy these models on mobile devices. Recently, tensor decompositions have been used to speed up CNNs. In this paper, we further develop the tensor decomposition technique. We propose a new algorithm for computing the low-rank ten...
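The abstract is truncated before the algorithm details, but the general flavor of low-rank convolution factorization can be sketched as follows: replace a k×k convolution with a (k×1) then (1×k) pair through a small number of intermediate channels, which caps the rank and shrinks parameters and FLOPs. This is a generic illustration under that assumption, not the paper's specific decomposition algorithm.

```python
import torch.nn as nn

def low_rank_conv(c_in, c_out, k, rank):
    """Rank-constrained replacement for a k x k convolution: a (k x 1)
    conv into `rank` intermediate channels followed by a (1 x k) conv.
    Parameters drop from c_in*c_out*k*k to rank*k*(c_in + c_out)."""
    return nn.Sequential(
        nn.Conv2d(c_in, rank, kernel_size=(k, 1), padding=(k // 2, 0), bias=False),
        nn.Conv2d(rank, c_out, kernel_size=(1, k), padding=(0, k // 2), bias=True),
    )
```

For example, with c_in = c_out = 256, k = 3, and rank = 64, the layer costs 64·3·(256+256) ≈ 98K weights instead of 256·256·9 ≈ 590K, with a corresponding saving in multiply-adds.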


Max-Pooling Dropout for Regularization of Convolutional Neural Networks

Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advoc...
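Under the common formulation (dropout applied to a pooling region's inputs before taking the max), the training-time behavior can be sketched as below. The multinomial equivalence follows because the surviving maximum is the j-th largest activation exactly when all larger ones are dropped and it is kept. The sketch assumes nonnegative, post-ReLU activations so that zeroed units can never win the max.

```python
import torch
import torch.nn.functional as F

def max_pool_dropout(x, drop_prob=0.3, kernel=2, training=True):
    """Dropout before max pooling: with units dropped independently,
    the pooled output is the largest *surviving* activation, i.e. a
    draw from a multinomial over the region's sorted activations."""
    if training and drop_prob > 0:
        mask = (torch.rand_like(x) >= drop_prob).float()
        x = x * mask   # assumes nonnegative (post-ReLU) activations
    return F.max_pool2d(x, kernel)
```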


Stochastic Pooling for Regularization of Deep Convolutional Neural Networks

We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined wi...
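A sketch of that procedure for non-overlapping 2×2 regions, assuming nonnegative (post-ReLU) activations so the region's activities form a valid multinomial; at test time the pooled value is the probability-weighted average of the region rather than a sample. The helper name and the fixed 2×2 geometry are illustrative choices.

```python
import torch
import torch.nn.functional as F

def stochastic_pool2x2(x, training=True, eps=1e-12):
    """Stochastic pooling over non-overlapping 2x2 regions (H, W even).
    Each region's activations are normalized into a multinomial; at
    train time one activation is sampled, at test time the pooled value
    is the probability-weighted average of the region."""
    B, C, H, W = x.shape
    regions = F.unfold(x, kernel_size=2, stride=2)            # (B, C*4, L)
    regions = regions.view(B, C, 4, -1).permute(0, 1, 3, 2)   # (B, C, L, 4)

    probs = regions.clamp(min=0) + eps     # eps keeps all-zero regions valid
    probs = probs / probs.sum(dim=-1, keepdim=True)

    if training:
        idx = torch.multinomial(probs.reshape(-1, 4), 1)      # sample a location
        out = regions.reshape(-1, 4).gather(1, idx).view(B, C, -1)
    else:
        out = (probs * regions).sum(dim=-1)                   # weighted average
    return out.reshape(B, C, H // 2, W // 2)
```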


Feature-based Attention in Convolutional Neural Networks

Convolutional neural networks (CNNs) have proven effective for image processing tasks, such as object recognition and classification. Recently, CNNs have been enhanced with concepts of attention, similar to those found in biology. Much of this work on attention has focused on effective serial spatial processing. In this paper, I introduce a simple procedure for applying feature-based attention ...
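The truncated abstract does not spell out the procedure, but feature-based attention is typically realized as a multiplicative gain applied to whole feature maps (channels), as opposed to spatial attention over locations. A hypothetical sketch under that assumption, not the paper's exact mechanism:

```python
import torch

def apply_feature_attention(fmap, channel_gain):
    """Multiplicative feature-based attention: scale each feature map
    (channel) by a gain, boosting channels tuned to the attended
    category and suppressing the rest, uniformly across space.

    fmap: (B, C, H, W) activations; channel_gain: (C,) nonnegative gains.
    """
    return fmap * channel_gain.view(1, -1, 1, 1)

# hypothetical usage: mild random gains on a 64-channel layer
attended = apply_feature_attention(torch.randn(8, 64, 14, 14),
                                   1.0 + 0.5 * torch.rand(64))
```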


Feature Representation in Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are powerful models that achieve impressive results for image classification. In addition, pre-trained CNNs are also useful for other computer vision tasks as generic feature extractors [1]. This paper aims to gain insight into the feature aspect of CNNs and demonstrate other uses of CNN features. Our results show that CNN feature maps can be used with Random...
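A sketch of the generic-feature-extractor pattern the abstract refers to, pairing a pretrained CNN's pooled features with a Random Forest. The choice of ResNet-18 and scikit-learn here is an assumption for illustration, not the paper's setup.

```python
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

# Use a pretrained CNN as a frozen, generic feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # expose the 512-d pooled features
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """images: (B, 3, 224, 224), ImageNet-normalized; returns (B, 512)."""
    return backbone(images).cpu().numpy()

# X_train / y_train are assumed to be built by running extract_features
# over a labeled dataset:
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# preds = clf.predict(extract_features(test_images))
```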



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i1.19938